Snapshots in Hadoop Distributed File System

Authors

  • Sameer Agarwal
  • Dhruba Borthakur
  • Ion Stoica
Abstract

The ability to take snapshots is an essential functionality of any file system, as snapshots enable system administrators to perform data backup and recovery in case of failure. We present a low-overhead snapshot solution for HDFS, a popular distributed file system for large clusters of commodity servers. Our solution obviates the need for complex distributed snapshot algorithms by taking advantage of the centralized architecture of the HDFS control plane, which stores all file metadata on a single node, and alleviates the need for expensive copy-on-write operations by exploiting HDFS's limited interface, which restricts write operations to append and truncate only. Furthermore, our solution employs new snapshot data structures to address the inherent challenges related to data replication and distribution in HDFS. In this paper, we have designed, implemented, and evaluated a fast and efficient snapshot solution based on selective copy-on-appends that is specifically suited for HDFS-like distributed file systems.
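The abstract's key observation, that an append/truncate-only write interface makes snapshots cheap because existing bytes are almost never overwritten, can be illustrated with a minimal sketch. The class below is a hypothetical single-node toy, not the paper's implementation (the names AppendOnlyFile, takeSnapshot, etc. are ours, and it ignores blocks, replication, and the NameNode): a snapshot records only the file length, appends never copy data, and only a truncate that would destroy snapshotted bytes triggers a selective copy.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy illustration of a length-based snapshot over an append-only file.
 * Taking a snapshot records only the current length (O(1), no data copy);
 * appends never disturb snapshotted bytes; only a truncate that would
 * destroy bytes still referenced by a snapshot triggers a copy.
 */
class AppendOnlyFile {
    private final StringBuilder data = new StringBuilder();                 // live file contents
    private final Map<Integer, Integer> snapshotLengths = new HashMap<>();  // snapshot id -> length
    private final Map<Integer, String> materialized = new HashMap<>();      // lazily copied snapshots
    private int nextSnapshotId = 0;

    /** Record the current length as a snapshot; no bytes are copied. */
    int takeSnapshot() {
        snapshotLengths.put(nextSnapshotId, data.length());
        return nextSnapshotId++;
    }

    /** Appends can never overwrite snapshotted data, so no copy is needed. */
    void append(String bytes) {
        data.append(bytes);
    }

    /** A truncate may destroy snapshotted bytes: copy those snapshots first. */
    void truncate(int newLength) {
        for (Map.Entry<Integer, Integer> e : snapshotLengths.entrySet()) {
            if (e.getValue() > newLength && !materialized.containsKey(e.getKey())) {
                materialized.put(e.getKey(), data.substring(0, e.getValue()));
            }
        }
        data.setLength(Math.min(newLength, data.length()));
    }

    /** A snapshot is either a live prefix of the file or a preserved copy. */
    String readSnapshot(int snapshotId) {
        String copy = materialized.get(snapshotId);
        return copy != null ? copy : data.substring(0, snapshotLengths.get(snapshotId));
    }
}
```

For example, after append("ab"), takeSnapshot(), and append("cd"), readSnapshot still returns "ab" without any data having been copied; only a later truncate(1) would force a preserved copy of the snapshotted bytes.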

Related Articles

Adaptive Dynamic Data Placement Algorithm for Hadoop in Heterogeneous Environments

The Hadoop MapReduce framework is an important distributed processing model for large-scale, data-intensive applications. The rack-aware data placement strategy of the current Hadoop and its distributed file system assumes a homogeneous cluster, in which every node has the same computing capacity and is assigned the same workload. Default Hadoop d...
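The snippet above is cut off before the paper's actual placement algorithm, but the general idea of capacity-aware placement can be sketched. The helper below is purely illustrative and not the cited paper's method (the name assignBlockCounts and the capacity scores are assumptions): instead of splitting blocks evenly across nodes, it distributes them in proportion to each node's capacity score.

```java
/**
 * Hypothetical illustration of capacity-aware block placement: blocks are
 * assigned to nodes in proportion to a per-node computing-capacity score,
 * instead of the uniform split a homogeneous cluster would assume.
 */
class CapacityAwarePlacement {
    static int[] assignBlockCounts(double[] capacityScores, int totalBlocks) {
        double total = 0;
        for (double c : capacityScores) total += c;

        int[] counts = new int[capacityScores.length];
        int assigned = 0;
        for (int i = 0; i < capacityScores.length; i++) {
            // Each node gets a share of blocks proportional to its capacity.
            counts[i] = (int) Math.floor(totalBlocks * capacityScores[i] / total);
            assigned += counts[i];
        }
        // Hand out any leftover blocks from rounding, round-robin.
        for (int i = 0; assigned < totalBlocks; i = (i + 1) % counts.length) {
            counts[i]++;
            assigned++;
        }
        return counts;
    }

    public static void main(String[] args) {
        // Three heterogeneous nodes: the first is roughly twice as fast as the others.
        double[] capacity = {2.0, 1.0, 1.0};
        int[] counts = assignBlockCounts(capacity, 10);
        System.out.println(java.util.Arrays.toString(counts)); // prints [6, 2, 2]
    }
}
```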


An Efficient Approach to Optimize the Performance of Massive Small Files in Hadoop MapReduce Framework

Hadoop, the most popular open source distributed computing framework, was designed by Doug Cutting and his team; it employs thousands of nodes to process and analyze the huge volumes of data known as Big Data. The major core components of Hadoop are HDFS (Hadoop Distributed File System) and MapReduce. This framework is the most popular and powerful one for storing, managing, and processing Big Data appl...


Ceph as a scalable alternative to the Hadoop Distributed File System

The Hadoop Distributed File System (HDFS) has a single metadata server that sets a hard limit on its maximum size. Ceph, a high-performance distributed file system under development since 2005 and now supported in Linux, bypasses the scaling limits of HDFS. We describe Ceph and its elements and provide instructions for installing a demonstration system that can be used...


Comparing Hadoop and Fat-Btree Based Access Method for Small File I/O Applications

Hadoop has been widely used in various clusters to build scalable, high-performance distributed file systems. However, the Hadoop Distributed File System (HDFS) is designed for managing large files. For small-file applications, metadata requests flood the network and consume most of the Namenode's memory, sharply hindering its performance. Therefore, many web applications ...


Architecture for Hadoop Distributed File Systems

The Hadoop Distributed File System (HDFS) is designed to store very large data sets reliably, and to stream those data sets at high bandwidth to user applications. In a large cluster, thousands of servers both host directly attached storage and execute user application tasks. By distributing storage and computation across many servers, the resource can grow with demand while remaining economica...



Journal:

Volume   Issue

Pages  -

Publication date: 2011